Deep Learning based Background Removal from Images using TensorFlow and Python


In this tutorial, we are going to learn how to use deep learning to remove background from images with TensorFlow. In short, we’ll use DeepLabV3+, a semantic segmentation based model to extract the background and foreground mask from the image. We are going to use these masks to extract the background from the image and leave the foreground intact.

Introduction

Image background removal is a popular task in the computer vision community. It is used to alter the background in images for a better user experience. For example, while clicking a selfie, you can blur the background (bokeh) using the portrait effect of the camera. This portrait effect is generated by extracting the background from the original image.

An example of the portrait effect. Source: https://ai.googleblog.com/2017/10/portrait-mode-on-pixel-2-and-pixel-2-xl.html

Here, we are going to use semantic segmentation algorithms for segmenting the background from images and applying it to alter the background. For this task, we are going to use the DeepLabV3+ architecture, which is trained on human image segmentation.

An example image and binary mask from the Human Image Segmentation dataset.

To learn about the training of DeepLabV3+ on Human Image Segmentation, check the links:

Blog Post: Human Image Segmentation with DeepLabV3+ in TensorFlow
YouTube Tutorial: Human Image Segmentation with DeepLabV3+ in TensorFlow 2.5.0

YouTube Tutorial

Watch the video for a better explanation of the process of “Remove Photo Background using Deep Learning in Python.”

The entire process of background removal:

Loading the trained DeepLabV3+ model in TensorFlow.
Loading and reading all the images.
Predicting the mask for all these images.
Extracting the background and foreground masks.
Using the masks to remove or change the background.

Implementation

To begin, we import all the required libraries, classes and functions.

import os

os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"

import numpy as np
import cv2
from glob import glob
from tqdm import tqdm
import tensorflow as tf
from tensorflow.keras.utils import CustomObjectScope
from metrics import dice_loss, dice_coef, iou
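
The dice_loss, dice_coef and iou functions come from the metrics.py file used while training the model (see the training tutorial linked above). If that file is not at hand, a minimal sketch along the following lines should be enough for deserialization, since we only run inference here. The exact definitions and the smoothing constant are assumptions, so prefer the original metrics.py if you have it.

# metrics.py -- a minimal sketch of the custom loss/metrics referenced by the saved model.
# Assumed definitions; only needed so Keras can deserialize the model for inference.
import tensorflow as tf

smooth = 1e-15

def iou(y_true, y_pred):
    # Intersection over Union for binary masks.
    intersection = tf.reduce_sum(y_true * y_pred)
    union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) - intersection
    return (intersection + smooth) / (union + smooth)

def dice_coef(y_true, y_pred):
    # Dice coefficient for binary masks.
    intersection = tf.reduce_sum(y_true * y_pred)
    return (2.0 * intersection + smooth) / (tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + smooth)

def dice_loss(y_true, y_pred):
    return 1.0 - dice_coef(y_true, y_pred)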

Next, we define height and width as global parameters.

H = 512
W = 512

Here, we define a function named create_dir. It creates a folder or directory at the given path if it does not already exist.

def create_dir(path):
    if not os.path.exists(path):
        os.makedirs(path)

Next, we seed the environment to make sure that the results are reproducible. The same seed was used while training the DeepLabV3+ model.

np.random.seed(42)
tf.random.set_seed(42)

Now, we are going to create an empty directory called remove_bg. This directory is used to store the processed images.

create_dir("remove_bg")

Here, we load the trained DeepLabV3+ model using the TensorFlow functions.

with CustomObjectScope({'iou': iou, 'dice_coef': dice_coef, 'dice_loss': dice_loss}):
    model = tf.keras.models.load_model("model.h5")
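
As a side note, since the custom objects here are only the loss and metrics, loading the model with compile=False should also work for inference and avoids the CustomObjectScope entirely:

# Inference-only alternative: skip compiling, so the custom loss/metrics are not needed.
model = tf.keras.models.load_model("model.h5", compile=False)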

As we have loaded the trained model, we are going to take a look at the model summary.

model.summary()

Model: "functional_1"
__________________________________________________________________________________________________
Layer (type)                    Output Shape          Param #     Connected to
==================================================================================================
input_1 (InputLayer)            [(None, 512, 512, 3)  0
__________________________________________________________________________________________________
conv1_pad (ZeroPadding2D)       (None, 518, 518, 3)   0           input_1[0][0]
__________________________________________________________________________________________________
conv1_conv (Conv2D)             (None, 256, 256, 64)  9472        conv1_pad[0][0]
__________________________________________________________________________________________________
... (ResNet50 encoder blocks, ASPP and decoder layers omitted here for brevity) ...
__________________________________________________________________________________________________
up_sampling2d_2 (UpSampling2D)  (None, 512, 512, 256) 0           tf_op_layer_Mul_1[0][0]
__________________________________________________________________________________________________
output_layer (Conv2D)           (None, 512, 512, 1)   257         up_sampling2d_2[0][0]
__________________________________________________________________________________________________
activation_8 (Activation)       (None, 512, 512, 1)   0           output_layer[0][0]
==================================================================================================
Total params: 17,869,697
Trainable params: 17,834,913
Non-trainable params: 34,784

Now, we select all the images whose background needs to be removed.

data_x = glob("images/*")

Next, we loop over all the images and extract the name of each image from its path.

for path in tqdm(data_x, total=len(data_x)):
    """ Extracting the name """
    name = path.split("/")[-1].split(".")[0]
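
A small portability note: on Windows, paths use backslashes, so splitting on "/" may leave the directory prefix in name. A platform-independent alternative (not part of the original code) is:

    """ Optional: platform-independent alternative to the split("/") above """
    name = os.path.splitext(os.path.basename(path))[0]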

Here, we read the image using OpenCV (note that cv2.imread returns the image in BGR channel order) and preprocess it for the model.

""" Read the image """ image = cv2.imread(path, cv2.IMREAD_COLOR) h, w, _ = image.shape x = cv2.resize(image, (W, H)) x = x/255.0 x = x.astype(np.float32) x = np.expand_dims(x, axis=0)

In the above code, we do the following:

Pass the image path to the OpenCV cv2.imread function to read the image (in BGR channel order).
Save the original height and width of the image before resizing.
Resize the image to 512 x 512 pixels.
Normalize all the pixel values by dividing them by 255.0, i.e., the maximum pixel value.
Change the datatype of the image numpy array to float32.
Expand the dimensions of the numpy array on the first axis: [512 x 512 x 3] to [1 x 512 x 512 x 3].
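
One practical caveat (an addition, not in the original code): cv2.imread returns None for files it cannot decode, so a simple guard placed right after the imread call inside the loop avoids a crash on a stray non-image file:

    """ Optional safety check: skip files that OpenCV cannot read. """
    if image is None:
        continue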

The processed original image is now fed to the DeepLabV3+ model as input, which predicts the binary mask.

""" Prediction """ y = model.predict(x)[0] y = cv2.resize(y, (w, h)) y = np.expand_dims(y, axis=-1) y = y > 0.5 photo_mask = y background_mask = np.abs(1-y)

In the above code, we perform the following:

Predict the mask.
Resize the predicted mask to the original height and width: [512 x 512] to [h x w].
Expand the dimensions on the last axis.
The mask contains pixel values in the range of 0 to 1, so we apply 0.5 as the threshold to convert each pixel value to either 0 or 1.
The photo_mask (predicted mask) segments the human or person present in the image. The background_mask segments the background region of the image; it is generated by inverting the photo_mask.

An example image.

The above image is used as input: it is resized and passed to the trained DeepLabV3+ model. Now, we are going to take a look at the two masks:

Foreground Mask (photo_mask)
Background Mask (background_mask)

    """ Save the masks. Note: distinct file name suffixes are used here so these
        visualisations do not overwrite each other or the final output. """
    cv2.imwrite(f"remove_bg/{name}_photo_mask.png", photo_mask * 255)
    cv2.imwrite(f"remove_bg/{name}_background_mask.png", background_mask * 255)

Foreground and background mask for the original image.

    """ Apply the masks to the original image """
    cv2.imwrite(f"remove_bg/{name}_foreground.png", image * photo_mask)
    cv2.imwrite(f"remove_bg/{name}_background.png", image * background_mask)

Foreground image and the background extracted from the original image.

    """ Replace the background with a solid red colour and save the final image """
    masked_photo = image * photo_mask
    background_mask = np.concatenate([background_mask, background_mask, background_mask], axis=-1)
    background_mask = background_mask * [0, 0, 255]
    final_photo = masked_photo + background_mask
    cv2.imwrite(f"remove_bg/{name}.png", final_photo)

In the above code:

Element-wise multiplication between the original image and the foreground mask (photo_mask) extracts the foreground image.
Concatenating three copies of the background_mask (numpy array) converts it from [H x W x 1] to [H x W x 3].
Multiply the background mask with the required colour code in BGR format. Here, we specified red: [0, 0, 255].
Add the foreground image and the red-coloured background mask.
Save the image.

Two examples of images where the background was removed and then changed to red.
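
As an extension of the red-background example above, the same masks can reproduce the portrait (bokeh) effect mentioned in the introduction, or export a PNG with a transparent background. The following is a minimal sketch meant to run inside the same loop; the Gaussian kernel size and the _bokeh / _transparent file name suffixes are illustrative choices, not part of the original code.

    """ Optional: portrait (bokeh) effect -- keep the person sharp, blur the background.
        Sketch only; the (51, 51) kernel size and file names are arbitrary choices. """
    blurred = cv2.GaussianBlur(image, (51, 51), 0)
    bokeh = (image * photo_mask + blurred * (1 - photo_mask)).astype(np.uint8)
    cv2.imwrite(f"remove_bg/{name}_bokeh.png", bokeh)

    """ Optional: transparent background -- use the foreground mask as an alpha
        channel and save a 4-channel (BGRA) PNG. """
    alpha = (photo_mask * 255).astype(np.uint8)        # [h, w, 1], values 0 or 255
    rgba = np.concatenate([image, alpha], axis=-1)     # [h, w, 4]
    cv2.imwrite(f"remove_bg/{name}_transparent.png", rgba)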

Summary

In this tutorial, we learned to remove the background from human images using a semantic segmentation model known as DeepLabV3+.

If you have any questions or queries, contact me:

YouTube
Facebook
Twitter
Instagram

IDIOT DEVELOPER – https://www.youtube.com/c/IdiotDeveloper/?sub_confirmation=1


